93 research outputs found

    On deconvolution of distribution functions

    The subject of this paper is the problem of nonparametric estimation of a continuous distribution function from observations with measurement errors. We study the minimax complexity of this problem when the unknown distribution has a density belonging to the Sobolev class and the error density is ordinary smooth. We develop rate-optimal estimators based on direct inversion of the empirical characteristic function. We also derive minimax affine estimators of the distribution function, which are given by an explicit convex optimization problem. Adaptive versions of these estimators are proposed, and some numerical results demonstrating the good practical behavior of the developed procedures are presented. Comment: Published at http://dx.doi.org/10.1214/11-AOS907 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
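
    As a rough illustration of the direct-inversion idea (not the authors' exact estimator), the following sketch estimates a distribution function from noisy observations by dividing the empirical characteristic function of the data by the known error characteristic function and applying a Gil-Pelaez-type inversion with a spectral cutoff; the cutoff level, the Laplace error model, and all parameter choices below are illustrative assumptions.

```python
import numpy as np

def deconv_cdf(y, x_grid, err_cf, cutoff=10.0, n_t=1001):
    """Sketch of a deconvolution estimator of the distribution function of X
    from observations y = X + error, via inversion of the empirical
    characteristic function with a spectral cutoff (illustrative, not the
    paper's rate-optimal construction)."""
    t = np.linspace(-cutoff, cutoff, n_t)
    t = t[np.abs(t) > 1e-9]                                 # drop t = 0 to avoid division by zero
    emp_cf = np.mean(np.exp(1j * np.outer(t, y)), axis=1)   # empirical c.f. of the noisy data
    target_cf = emp_cf / err_cf(t)                          # deconvolution: estimated c.f. of X
    dt = t[1] - t[0]
    F = np.empty(len(x_grid))
    for k, x in enumerate(x_grid):
        # Gil-Pelaez inversion: F(x) = 1/2 - (1/(2*pi)) * int Im(e^{-itx} phi(t)) / t dt
        integrand = np.imag(np.exp(-1j * t * x) * target_cf) / t
        F[k] = 0.5 - integrand.sum() * dt / (2.0 * np.pi)
    return np.clip(F, 0.0, 1.0)

# Toy usage: X ~ N(0, 1) contaminated with Laplace(0, 0.5) noise
rng = np.random.default_rng(0)
x_true = rng.normal(size=2000)
noise = rng.laplace(scale=0.5, size=2000)
laplace_cf = lambda t: 1.0 / (1.0 + 0.25 * t**2)            # c.f. of Laplace(0, 0.5)
F_hat = deconv_cdf(x_true + noise, np.linspace(-3, 3, 61), laplace_cf)
```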

    Learning by mirror averaging

    Given a finite collection of estimators or classifiers, we study the problem of model selection-type aggregation: we construct a new estimator or classifier, called the aggregate, which is nearly as good as the best among them with respect to a given risk criterion. We define our aggregate by a simple recursive procedure which solves an auxiliary stochastic linear programming problem related to the original nonlinear one and constitutes a special case of the mirror averaging algorithm. We show that the aggregate satisfies sharp oracle inequalities under some general assumptions. The results are applied to several problems including regression, classification and density estimation. Comment: Published at http://dx.doi.org/10.1214/07-AOS546 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
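
    The following is a minimal sketch of the mirror-averaging recipe for model selection-type aggregation, not the paper's exact recursion: exponential weights are recomputed as the data arrive and the weight vectors are averaged along the trajectory; the temperature `beta` and the squared-loss setup are illustrative assumptions.

```python
import numpy as np

def mirror_averaging(preds, y, beta=1.0):
    """Sketch of model selection-type aggregation by mirror averaging:
    exponential weights based on cumulative squared losses are updated as the
    data arrive, and the weight vectors are averaged over the trajectory.
    `beta` is an illustrative temperature, not a theoretically tuned value."""
    n, M = preds.shape
    cum_loss = np.zeros(M)
    avg_w = np.zeros(M)
    for i in range(n):
        w = np.exp(-beta * (cum_loss - cum_loss.min()))     # weights from the data seen so far
        w /= w.sum()
        avg_w += w / n                                      # the averaging ("mirror averaging") step
        cum_loss += (preds[i] - y[i]) ** 2                  # reveal observation i, update losses
    return avg_w                                            # aggregate = convex combination with these weights

# Toy usage: three constant predictors under squared loss
rng = np.random.default_rng(1)
y = 0.3 + 0.1 * rng.normal(size=200)
preds = np.column_stack([np.full(200, c) for c in (0.0, 0.3, 1.0)])
weights = mirror_averaging(preds, y)                        # concentrates on the middle predictor
```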

    Mirror Descent and Convex Optimization Problems With Non-Smooth Inequality Constraints

    We consider the problem of minimizing a convex function on a simple set subject to a convex non-smooth inequality constraint, and describe first-order methods for such problems in different situations: smooth or non-smooth objective function; convex or strongly convex objective and constraint; deterministic or randomized information about the objective and constraint. We hope that it is convenient for the reader to have the methods for these different settings in one place. The described methods are based on the Mirror Descent algorithm and the switching subgradient scheme. One of our goals is to propose, for the listed settings, a Mirror Descent method with adaptive stepsizes and an adaptive stopping rule, meaning that neither the stepsizes nor the stopping rule requires knowledge of the Lipschitz constant of the objective or constraint. We also construct a Mirror Descent method for problems whose objective function is not Lipschitz continuous, e.g. a quadratic function. Besides that, we address the problem of recovering the solution of the dual problem.
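
    A simplified sketch of the switching subgradient scheme with adaptive stepsizes in the Euclidean (plain subgradient) setup is given below; it is not the paper's exact method, and the adaptive stopping rule is replaced by a fixed iteration budget. Steps follow the constraint subgradient when the constraint is violated by more than a tolerance eps, and the objective subgradient otherwise, with stepsizes eps/||d||^2 that require no Lipschitz constants; the toy problem at the end is hypothetical.

```python
import numpy as np

def switching_subgradient(f, grad_f, g, grad_g, x0, eps, n_iter=5000):
    """Simplified sketch of the switching subgradient scheme for
        min f(x)  subject to  g(x) <= 0,
    with adaptive stepsizes eps / ||d||^2 (no Lipschitz constants needed) and a
    fixed iteration budget in place of an adaptive stopping rule."""
    x = np.asarray(x0, dtype=float)
    prod_sum, prod_count = np.zeros_like(x), 0
    for _ in range(n_iter):
        if g(x) > eps:                      # "non-productive" step: reduce constraint violation
            d = grad_g(x)
        else:                               # "productive" step: decrease the objective
            d = grad_f(x)
            prod_sum += x
            prod_count += 1
        x = x - (eps / (d @ d + 1e-12)) * d
    return prod_sum / max(prod_count, 1)    # average of the productive iterates

# Hypothetical toy problem: minimize ||x - c||_1 subject to ||x||_2 - 1 <= 0
c = np.array([2.0, -1.0])
f = lambda x: np.abs(x - c).sum()
grad_f = lambda x: np.sign(x - c)
g = lambda x: np.linalg.norm(x) - 1.0
grad_g = lambda x: x / (np.linalg.norm(x) + 1e-12)
x_eps = switching_subgradient(f, grad_f, g, grad_g, np.zeros(2), eps=1e-2)
```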

    Verifiable conditions of $\ell_1$-recovery of sparse signals with sign restrictions

    We propose necessary and sufficient conditions for a sensing matrix to be "$s$-semigood" -- to allow for exact $\ell_1$-recovery of sparse signals with at most $s$ nonzero entries under sign restrictions on part of the entries. We express the error bounds for imperfect $\ell_1$-recovery in terms of the characteristics underlying these conditions. Furthermore, we demonstrate that these characteristics, although difficult to evaluate, lead to verifiable sufficient conditions for exact sparse $\ell_1$-recovery and to efficiently computable upper bounds on those $s$ for which a given sensing matrix is $s$-semigood. We concentrate on the properties of the proposed verifiable sufficient conditions for $s$-semigoodness and describe their limits of performance.
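
    For illustration only, the recovery problem that the $s$-semigoodness conditions refer to can be written as a linear program; the sketch below solves sign-restricted $\ell_1$-recovery (here, nonnegativity on a chosen index set) with SciPy's linprog via the standard split x = u - v. It does not implement the paper's verifiable conditions or bounds.

```python
import numpy as np
from scipy.optimize import linprog

def l1_recover_signed(A, y, nonneg_idx):
    """Sketch of sign-restricted l1-recovery as a linear program:
        min ||x||_1  subject to  A x = y,  x_i >= 0 for i in nonneg_idx,
    using the split x = u - v with u, v >= 0 and forcing v_i = 0 on the
    sign-restricted coordinates."""
    m, n = A.shape
    c = np.ones(2 * n)                      # objective: sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([A, -A])               # A u - A v = y
    bounds = [(0, None)] * (2 * n)
    for i in nonneg_idx:
        bounds[n + i] = (0, 0)              # v_i = 0  =>  x_i = u_i >= 0
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
    return res.x[:n] - res.x[n:]

# Toy usage: random sensing matrix, sparse nonnegative signal
rng = np.random.default_rng(2)
A = rng.normal(size=(30, 80)) / np.sqrt(30)
x_true = np.zeros(80)
x_true[[3, 17, 42]] = [1.5, 2.0, 0.7]
x_hat = l1_recover_signed(A, A @ x_true, nonneg_idx=range(80))
```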

    On Verifiable Sufficient Conditions for Sparse Signal Recovery via $\ell_1$ Minimization

    We propose novel necessary and sufficient conditions for a sensing matrix to be "$s$-good" -- to allow for exact $\ell_1$-recovery of sparse signals with $s$ nonzero entries when no measurement noise is present. We then express the error bounds for imperfect $\ell_1$-recovery (nonzero measurement noise, nearly $s$-sparse signal, near-optimal solution of the optimization problem yielding the $\ell_1$-recovery) in terms of the characteristics underlying these conditions. Further, we demonstrate (and this is the principal result of the paper) that these characteristics, although difficult to evaluate, lead to verifiable sufficient conditions for exact sparse $\ell_1$-recovery and to efficiently computable upper bounds on those $s$ for which a given sensing matrix is $s$-good. We also establish instructive links between our approach and basic concepts of Compressed Sensing theory, such as the Restricted Isometry and Restricted Eigenvalue properties.
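
    The paper's point is to certify $s$-goodness by verifiable, efficiently computable conditions; purely as an illustration of the notion itself (and not of those conditions), the sketch below solves the noiseless $\ell_1$-recovery LP and probes, by Monte Carlo over random $s$-sparse signals, for which $s$ a given matrix still recovers exactly.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """min ||x||_1 subject to A x = y, via the standard LP split x = u - v."""
    m, n = A.shape
    res = linprog(np.ones(2 * n), A_eq=np.hstack([A, -A]), b_eq=y,
                  bounds=[(0, None)] * (2 * n), method="highs")
    return res.x[:n] - res.x[n:]

def empirical_recovery_rate(A, s, trials=20, tol=1e-6, seed=0):
    """Monte Carlo probe (NOT the paper's verifiable certificate): fraction of
    random s-sparse signals recovered exactly by noiseless l1-minimization."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    hits = 0
    for _ in range(trials):
        x = np.zeros(n)
        support = rng.choice(n, size=s, replace=False)
        x[support] = rng.normal(size=s)
        hits += np.max(np.abs(basis_pursuit(A, A @ x) - x)) < tol
    return hits / trials

rng = np.random.default_rng(3)
A = rng.normal(size=(40, 100)) / np.sqrt(40)
rates = [empirical_recovery_rate(A, s) for s in (2, 5, 10, 15)]
```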

    On local linearization of control systems

    We consider the problem of topological linearization of smooth ($C^\infty$ or real analytic) control systems, i.e. of their local equivalence to a linear controllable system via point-wise transformations on the state and the control (static feedback transformations) that are topological but not necessarily differentiable. We prove that local topological linearization implies local smooth linearization at generic points. At arbitrary points, it implies local conjugation to a linear system via a homeomorphism that induces a smooth diffeomorphism on the state variables, and, except at "strongly" singular points, this homeomorphism can be chosen to be a smooth mapping (the inverse map need not be smooth). Deciding whether the same is true at "strongly" singular points is tantamount to solving an intriguing open question in differential topology.
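
    For reference, a standard formulation of local linearizability by static feedback (a textbook-style sketch, not quoted from the paper) reads as follows.

```latex
% A standard notion of local linearization by static feedback (reference sketch,
% not quoted from the paper).
A smooth control system $\dot{x} = f(x,u)$, $x \in \mathbb{R}^n$, $u \in \mathbb{R}^m$,
is locally (smoothly) linearizable at a point if there exist local transformations
\[
  z = \varphi(x), \qquad v = \psi(x,u),
\]
with $\varphi$ a diffeomorphism and $\psi(x,\cdot)$ invertible (a static feedback
transformation), such that in the new variables the dynamics become
\[
  \dot{z} = A z + B v, \qquad (A,B) \ \text{a controllable pair.}
\]
Topological linearization asks for the same with $\varphi$ and $\psi$ only
homeomorphisms; the paper studies when this weaker notion already forces the smooth one.
```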

    Advances in low-memory subgradient optimization

    One of the main goals in the development of non-smooth optimization is to cope with high-dimensional problems by decomposition, duality or Lagrangian relaxation, which greatly reduces the number of variables at the cost of worsening the differentiability of the objective or constraints. The small or medium dimensionality of the resulting non-smooth problems allows the use of bundle-type algorithms to achieve higher rates of convergence and higher accuracy, which of course comes at the cost of additional memory requirements, typically of the order of $n^2$, where $n$ is the number of variables of the non-smooth problem. However, with the rapid development of more and more sophisticated models in industry, economics, finance and other areas, such memory requirements are becoming too hard to satisfy. This raised interest in subgradient-based low-memory algorithms, and later developments in this area significantly improved over the early variants while still preserving $O(n)$ memory requirements. To review these developments, this chapter is devoted to black-box subgradient algorithms with minimal requirements for the storage of the auxiliary results needed to execute them. To provide historical perspective, the survey starts with the original result of N.Z. Shor, which opened this field with an application to the classical transportation problem. Theoretical complexity bounds for smooth and non-smooth convex and quasi-convex optimization problems are then briefly exposed to introduce the relevant fundamentals of non-smooth optimization. Special attention in this section is given to the adaptive step-size policy, which aims to attain the lowest complexity bounds. Unfortunately, the non-differentiability of the objective function in convex optimization essentially worsens the theoretical lower bounds on the rate of convergence of subgradient optimization compared to the smooth case, but there are modern techniques that allow solving non-smooth convex optimization problems faster than these lower complexity bounds dictate. Particular attention is given to the Nesterov smoothing technique, the Nesterov universal approach, and the Legendre (saddle point) representation approach. The new results on universal Mirror Prox algorithms represent the original parts of the survey. To demonstrate the application of non-smooth convex optimization algorithms to the solution of huge-scale extremal problems, we consider convex optimization problems with non-smooth functional constraints and propose two adaptive Mirror Descent methods. The first method is of primal-dual variety and is proved to be optimal in terms of lower oracle bounds for the class of Lipschitz-continuous convex objectives and constraints. The advantages of applying this method to the sparse Truss Topology Design problem are discussed in some detail. The second method can be applied to the solution of convex and quasi-convex optimization problems and is optimal in the sense of complexity bounds. The concluding part of the survey contains important references characterizing recent developments in non-smooth convex optimization.
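
    As a minimal illustration of the black-box, $O(n)$-memory setting discussed above (and not of any specific method from the survey), the sketch below runs a projected subgradient method that stores only the current point, its subgradient and the best point found so far, with the classical divergent-series stepsize; the toy non-smooth problem is hypothetical.

```python
import numpy as np

def projected_subgradient(f, subgrad, project, x0, R=1.0, n_iter=10000):
    """Minimal black-box subgradient method with O(n) memory: only the current
    point, its subgradient and the best point so far are stored.  The stepsize
    R / (||g|| * sqrt(k+1)) is the classical divergent-series choice; the survey
    discusses sharper adaptive policies."""
    x = np.asarray(x0, dtype=float)
    best_x, best_f = x.copy(), f(x)
    for k in range(n_iter):
        g = subgrad(x)
        step = R / (np.linalg.norm(g) * np.sqrt(k + 1.0) + 1e-12)
        x = project(x - step * g)
        fx = f(x)
        if fx < best_f:                      # keep only the best point seen so far
            best_x, best_f = x.copy(), fx
    return best_x, best_f

# Hypothetical toy problem: minimize max_i |x_i - a_i| over the Euclidean unit ball
a = np.array([1.0, -2.0, 0.5])
f = lambda x: np.max(np.abs(x - a))
def subgrad(x):
    i = int(np.argmax(np.abs(x - a)))
    g = np.zeros_like(x)
    g[i] = np.sign(x[i] - a[i])
    return g
project = lambda x: x / max(1.0, np.linalg.norm(x))   # projection onto the unit ball
x_best, f_best = projected_subgradient(f, subgrad, project, np.zeros(3))
```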

    Aggregation by exponential weighting, sharp PAC-Bayesian bounds and sparsity

    We study the problem of aggregation under squared loss in the model of regression with deterministic design. We obtain sharp PAC-Bayesian risk bounds for aggregates defined via exponential weights, under general assumptions on the distribution of errors and on the functions to aggregate. We then apply these results to derive sparsity oracle inequalities.
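
    A minimal sketch of an exponentially weighted aggregate under squared loss with deterministic design is given below; the default temperature, tied to a crude residual-variance estimate, and the toy candidates are illustrative assumptions, not the paper's tuning.

```python
import numpy as np

def exponential_weights_aggregate(preds, y, temperature=None):
    """Sketch of aggregation by exponential weighting under squared loss with
    deterministic design.  Weights are proportional to exp(-n * risk_j / temperature);
    the default temperature, 4 times a crude residual-variance estimate, only echoes
    the usual theoretical requirement and is not the paper's tuning."""
    n, M = preds.shape
    risks = np.mean((preds - y[:, None]) ** 2, axis=0)   # empirical risk of each candidate
    if temperature is None:
        sigma2 = max(np.var(y - preds[:, np.argmin(risks)]), 1e-12)
        temperature = 4.0 * sigma2
    w = np.exp(-n * (risks - risks.min()) / temperature)
    w /= w.sum()
    return preds @ w, w                                   # aggregate predictions and the weights

# Toy usage: aggregate three candidate fits of a noisy linear trend
rng = np.random.default_rng(4)
t = np.linspace(0.0, 1.0, 100)
y = 2.0 * t + 0.1 * rng.normal(size=100)
candidates = np.column_stack([1.5 * t, 2.0 * t, 3.0 * t])
agg, w = exponential_weights_aggregate(candidates, y)     # weight concentrates on 2.0 * t
```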